
    A Post-New Perspective Analysis of Darrell Bock's Views Pertaining to Torah Observance

    In the twenty-first century, Messianic Jews generally fall into one of two broad categories: those who are Torah observant, and those who consider the laws contained in the Mosaic Law valuable but optional. Those who hold the latter position may be Torah observant as an evangelistic strategy, as a way of practicing contextualization, or as a means of doing outreach to ethnic Jews. Nonetheless, they do not believe that sustained Torah observance is a necessary component of their Christian faith. Observant Messianic Jews would argue that when Jesus initiated the New Covenant, he did not nullify or abrogate the Mosaic Covenant, and they maintain an observant Jewish life as an act of covenant faithfulness to the Mosaic Covenant. Dr. Darrell Bock is one of the foremost proponents of Progressive Dispensationalism. Bock believes that Messianic Jews “can” be Torah observant but are under no obligation to be. The researcher, by way of contrast, holds a Post-New Perspective position and believes that the Mosaic Law should be adhered to out of faithfulness to the Mosaic Covenant, not missionary expediency. The purpose of this research is to examine the works of Dr. Darrell Bock and to interact with his positions on whether twenty-first-century Messianic Jews should be Torah observant. This dissertation employs a bibliographic and textual approach to that theological question.

    Performance Measurements of Supercomputing and Cloud Storage Solutions

    Increasing amounts of data from varied sources, particularly in the fields of machine learning and graph analytics, are causing storage requirements to grow rapidly. A variety of technologies exist for storing and sharing these data, ranging from parallel file systems used by supercomputers to distributed block storage systems found in clouds. Relatively few comparative measurements exist to inform decisions about which storage systems are best suited for particular tasks. This work provides these measurements for two of the most popular storage technologies: Lustre and Amazon S3. Lustre is an open-source, high-performance, parallel file system used by many of the largest supercomputers in the world. Amazon's Simple Storage Service, or S3, is part of the Amazon Web Services offering and provides a scalable, distributed option to store and retrieve data from anywhere on the Internet. Parallel processing is essential for achieving high performance on modern storage systems. The performance tests used span the gamut of parallel I/O scenarios, ranging from single-client, single-node Amazon S3 and Lustre performance to a large-scale, multi-client test designed to demonstrate the capabilities of a modern storage appliance under heavy load. These results show that, when parallel I/O is used correctly (i.e., many simultaneous read or write processes), full network bandwidth performance is achievable, ranging from 10 gigabits/s over a 10 GigE S3 connection to 0.35 terabits/s using Lustre on a 1200-port 10 GigE switch. These results demonstrate that S3 is well suited to sharing vast quantities of data over the Internet, while Lustre is well suited to processing large quantities of data locally.
    Comment: 5 pages, 4 figures, to appear in IEEE HPEC 201
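
    A single client process rarely saturates a fast link, so measurements like these spawn many independent readers or writers. Below is a minimal sketch of that pattern in Python, assuming boto3 is installed; the bucket and object names are placeholders, not identifiers from the paper:

        # Hypothetical sketch of the "many simultaneous readers" pattern:
        # each worker process issues an independent S3 GET, so aggregate
        # throughput scales with worker count until the link saturates.
        import time
        from multiprocessing import Pool

        import boto3  # assumed available; any S3 client would work

        BUCKET = "example-benchmark-bucket"               # placeholder name
        KEYS = [f"shard-{i:04d}.bin" for i in range(64)]  # placeholder objects

        def fetch(key: str) -> int:
            """Download one object and return its size in bytes."""
            s3 = boto3.client("s3")  # one client per worker invocation
            return len(s3.get_object(Bucket=BUCKET, Key=key)["Body"].read())

        if __name__ == "__main__":
            start = time.perf_counter()
            with Pool(processes=16) as pool:  # 16 simultaneous readers
                total_bytes = sum(pool.map(fetch, KEYS))
            elapsed = time.perf_counter() - start
            print(f"{total_bytes * 8 / elapsed / 1e9:.2f} gigabits/s aggregate")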

    Enabling On-Demand Database Computing with MIT SuperCloud Database Management System

    The MIT SuperCloud database management system allows for rapid creation and flexible execution of a variety of the latest scientific databases, including Apache Accumulo and SciDB. It is designed to permit these databases to run on a High Performance Computing Cluster (HPCC) platform as seamlessly as any other HPCC job. It ensures the seamless migration of the databases to the resources assigned by the HPCC scheduler and centralized storage of the database files when not running. It also permits snapshotting of databases to allow researchers to experiment and push the limits of the technology without concerns for data or productivity loss if the database becomes unstable.
    Comment: 6 pages; accepted to IEEE High Performance Extreme Computing (HPEC) conference 2015. arXiv admin note: text overlap with arXiv:1406.492
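
    The lifecycle described above (migrate database files to scheduler-assigned resources, store them centrally when not running, snapshot on demand) can be sketched roughly as follows. Every path and command here is a hypothetical stand-in, not the actual MIT SuperCloud tooling:

        # Illustrative sketch: stage database files from central storage to
        # the scheduler-assigned node, run the database there, and migrate
        # the files back when the job ends.
        import subprocess

        CENTRAL = "/central/storage/mydb"  # placeholder central file store
        LOCAL = "/scratch/mydb"            # placeholder node-local scratch

        def stage_in() -> None:
            subprocess.run(["rsync", "-a", f"{CENTRAL}/", LOCAL], check=True)

        def stage_out() -> None:
            subprocess.run(["rsync", "-a", f"{LOCAL}/", CENTRAL], check=True)

        def snapshot(tag: str) -> None:
            # Keep a point-in-time copy so an unstable experiment can be
            # rolled back without data or productivity loss.
            subprocess.run(
                ["rsync", "-a", f"{LOCAL}/", f"{CENTRAL}-snap-{tag}"], check=True
            )

        if __name__ == "__main__":
            stage_in()
            try:
                # Placeholder for launching the actual database server.
                subprocess.run(["./start_db.sh", LOCAL], check=True)
            finally:
                stage_out()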

    Discrimination of Chiral Guests by Chiral Channels: Variable Temperature Studies by SXRD and Solid State 13C NMR of the Deoxycholic Acid Complexes of Camphorquinone and Endo-3-Bromocamphor

    3α,12α-Dihydroxy-5β-cholan-24-oic acid (deoxycholic acid, DCA) is able to discriminate between the R- and S-enantiomers of camphorquinone and endo-3-bromocamphor and select only the S-enantiomers from a racemic mixture. DCA forms novel, well-ordered 1:1 adducts with (1S)-(+)-camphorquinone and (1S)-endo-(-)-3-bromocamphor, both of which have been characterized by single crystal X-ray diffraction (SXRD). When DCA is cocrystallized with (RS)-camphorquinone and (RS)-endo-3-bromocamphor, 1:1 adducts of the S-enantiomers are produced together with crystals of the free racemic guest. In contrast, in the absence of (1S)-(+)-camphorquinone, DCA forms a 2:1 adduct with (1R)-(-)-camphorquinone. In this 2:1 adduct the guest is disordered at ambient temperature and undergoes a phase change in the region 160–130 K, similar to that observed for the ferrocene adduct, but with only partial ordering of the guest. The SXRD structure of the low-temperature form and the variable-temperature 13C CP/MAS NMR spectra are reported. Cocrystallizing DCA with (1R)-endo-(+)-3-bromocamphor gives the free guest and a glassy solid.

    Lustre, Hadoop, Accumulo

    Data processing systems impose multiple views on data as it is processed by the system. These views include spreadsheets, databases, matrices, and graphs. There are a wide variety of technologies that can be used to store and process data through these different steps. The Lustre parallel file system, the Hadoop distributed file system, and the Accumulo database are all designed to address the largest and most challenging data storage problems. There have been many ad hoc comparisons of these technologies. This paper describes the foundational principles of each technology, provides simple models for assessing their capabilities, and compares the various technologies on a hypothetical common cluster. These comparisons indicate that Lustre provides 2x more storage capacity, is less likely to lose data during 3 simultaneous drive failures, and provides higher bandwidth on general-purpose workloads. Hadoop can provide 4x greater read bandwidth on special-purpose workloads. Accumulo provides 10,000x lower latency on random lookups than either Lustre or Hadoop, but Accumulo's bulk bandwidth is 10x less. Significant recent work has been done to enable mix-and-match solutions that allow Lustre, Hadoop, and Accumulo to be combined in different ways.
    Comment: 6 pages; accepted to IEEE High Performance Extreme Computing conference, Waltham, MA, 201
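
    The 2x storage-capacity claim is the kind of result a simple redundancy model can reproduce. Here is a sketch under assumed common defaults (HDFS triple replication, Lustre on 8+2 RAID6 arrays); these are not figures quoted from the paper:

        # Minimal capacity model in the spirit of the paper's comparisons.
        RAW_PB = 1.0                     # raw disk capacity in petabytes

        hdfs_usable = RAW_PB / 3         # every block is stored three times
        lustre_usable = RAW_PB * 8 / 10  # 8 data disks per 10-disk RAID6 group

        print(f"HDFS usable:   {hdfs_usable:.2f} PB")
        print(f"Lustre usable: {lustre_usable:.2f} PB")
        print(f"Lustre/HDFS:   {lustre_usable / hdfs_usable:.1f}x")  # ~2.4x

    Under these assumptions, Lustre yields roughly 2.4x the usable capacity of HDFS on the same raw disks.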

    Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers

    For decades, the use of HPC systems was limited to those in the physical sciences who had mastered their domain in conjunction with a deep understanding of HPC architectures and algorithms. During these same decades, consumer computing device advances produced tablets and smartphones that allow millions of children to interactively develop and share code projects across the globe. As the HPC community faces the challenges associated with guiding researchers from disciplines using high-productivity interactive tools to effective use of HPC systems, it seems appropriate to revisit the assumptions surrounding the necessary skills required for access to large computational systems. For over a decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high performance computing by seamlessly integrating familiar high-productivity tools to provide users with an increased number of design turns, rapid prototyping capability, and faster time to insight. In this paper, we discuss the lessons learned while supporting interactive, on-demand high performance computing from the perspectives of both the users and the team supporting the users and the system. Building on these lessons, we present an overview of current needs and the technical solutions we are building to lower the barrier to entry for new users from the humanities and the social and biological sciences.
    Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance Computing (WIHPC) 2018, held in conjunction with ISC High Performance 2018 in Frankfurt, Germany

    Energy-sensitive GaSb/AlAsSb separate absorption and multiplication avalanche photodiodes for X-Ray and gamma-ray detection

    Demonstrated are antimony-based (Sb-based) separate absorption and multiplication avalanche photodiodes (SAM-APDs) for X-ray and gamma-ray detection, composed of GaSb absorbers and large-bandgap AlAsSb multiplication regions in order to enhance the probability of stopping high-energy photons while drastically suppressing minority-carrier diffusion. Well-defined X-ray and gamma-ray photopeaks are observed under exposure to 241Am radioactive sources, demonstrating the desired energy-sensitive detector performance. Spectroscopic characterization shows a significant improvement in measured energy resolution due to the reduced peak electric field in the absorbers and suppressed nonradiative recombination at surfaces. Additionally, the GaSb/AlAsSb SAM-APDs clearly exhibit linear energy response up to 59.5 keV with a minimum full-width at half-maximum of 1.283 keV. Further analysis of the spectroscopic measurements suggests that device performance is intrinsically limited by noise from the readout electronics rather than from the photodiodes. This study provides a first understanding of Sb-based energy-sensitive SAM-APDs and paves the way toward efficient detection of high-energy photons for X-ray and gamma-ray spectroscopy.
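
    As a quick sanity check on the quoted figures, the relative energy resolution implied by a 1.283 keV FWHM at the 59.5 keV 241Am gamma line is about 2.2%:

        # Relative energy resolution implied by the reported figures.
        fwhm_kev = 1.283   # reported minimum FWHM
        line_kev = 59.5    # 241Am gamma line energy
        print(f"dE/E = {fwhm_kev / line_kev * 100:.2f}%")  # about 2.16%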

    Measuring the Impact of Spectre and Meltdown

    The Spectre and Meltdown flaws in modern microprocessors represent a new class of attacks that have been difficult to mitigate. The mitigations that have been proposed have known performance impacts. The reported magnitude of these impacts varies depending on the industry sector and expected workload characteristics. In this paper, we measure the performance impact on several workloads relevant to HPC systems. We show that the impact can be significant on both synthetic and realistic workloads. We also show that the performance penalties are difficult to avoid even in dedicated systems where security is a lesser concern.
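
    Because the Meltdown mitigation (kernel page-table isolation) adds cost to every user/kernel crossing, syscall-heavy loops are a common way to expose the overhead. Below is a minimal, hypothetical microbenchmark in Python, not the paper's workload; one might run it once on a stock kernel and once with mitigations disabled (e.g., booting a recent Linux kernel with "mitigations=off") and compare:

        # Syscall-heavy microbenchmark: each os.stat() crosses the
        # user/kernel boundary, which page-table isolation makes costlier.
        import os
        import time

        N = 1_000_000
        start = time.perf_counter()
        for _ in range(N):
            os.stat("/")  # one cheap system call per iteration
        elapsed = time.perf_counter() - start
        print(f"{N / elapsed:,.0f} syscalls/s")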